Results 1 - 20 of 25
1.
Kidney360 ; 2024 Apr 26.
Article in English | MEDLINE | ID: mdl-38664867

ABSTRACT

BACKGROUND: CKD is often underdiagnosed during early stages, when GFR is still preserved, because testing for the quantitative urine albumin-to-creatinine ratio (UACR) or protein-to-creatinine ratio (UPCR) is underutilized. Semi-quantitative dipstick proteinuria (DSP) on urinalysis is widely obtained but not accurate for identifying clinically significant proteinuria. METHODS: We identified all patients with a urinalysis and UACR or UPCR obtained on the same day at a tertiary referral center. The accuracy of DSP alone or in combination with specific gravity against a gold standard of UACR ≥30 mg/g or UPCR ≥0.15 g/g, characterizing clinically significant proteinuria, was evaluated using logistic regression. Models were internally validated using 10-fold cross-validation. The specific gravity for each DSP above which significant proteinuria is unlikely was determined. RESULTS: Of 11,229 patients, clinically significant proteinuria was present in 4,073 (36%). The area under the receiver operating characteristic curve (95% confidence interval) was 0.77 (0.76, 0.77) using DSP alone and 0.82 (0.82, 0.83) in combination with specific gravity (P<0.001), yielding a specificity of 0.93 (standard error, SE=0.02) and positive likelihood ratio of 9.52 (SE=0.85). The optimal specific gravity cut-offs to identify significant proteinuria were ≤1.0012, 1.0238, and 1.0442, for DSP of trace, 30, and 100 mg/dL. At any specific gravity, a DSP ≥300 mg/dL was extremely likely to represent significant proteinuria. CONCLUSION: Adding specific gravity to DSP improves recognition of clinically significant proteinuria and can be easily used to identify patients with early-stage CKD who may not have otherwise received a quantified proteinuria measurement for both clinical and research purposes.
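
A minimal sketch of the modeling approach described above, in Python with scikit-learn, assuming a hypothetical extract with columns dsp (dipstick category coded ordinally), specific_gravity, and significant_proteinuria (1 if UACR ≥30 mg/g or UPCR ≥0.15 g/g); the file name and columns are illustrative, not the study's actual data.

    # Compare cross-validated AUC for dipstick alone vs dipstick + specific gravity.
    # Treating dipstick proteinuria as a single ordinal feature is a simplification.
    import pandas as pd
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    df = pd.read_csv("urinalysis_cohort.csv")      # hypothetical extract
    y = df["significant_proteinuria"]

    auc_dsp = cross_val_score(LogisticRegression(max_iter=1000),
                              df[["dsp"]], y, cv=10, scoring="roc_auc").mean()
    auc_dsp_sg = cross_val_score(LogisticRegression(max_iter=1000),
                                 df[["dsp", "specific_gravity"]], y,
                                 cv=10, scoring="roc_auc").mean()
    print(f"AUC, dipstick alone: {auc_dsp:.2f}; dipstick + specific gravity: {auc_dsp_sg:.2f}")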

2.
J Investig Med ; 71(5): 459-464, 2023 06.
Article in English | MEDLINE | ID: mdl-36786195

ABSTRACT

We previously developed and validated a model to predict acute kidney injury (AKI) in hospitalized coronavirus disease 2019 (COVID-19) patients and found that the variables with the highest importance included a history of chronic kidney disease and markers of inflammation. Here, we assessed model performance during periods when COVID-19 cases were attributable almost exclusively to individual variants. Electronic Health Record data were obtained from patients admitted to 19 hospitals. The outcome was hospital-acquired AKI. The model, previously built in an Inception Cohort, was evaluated in Delta and Omicron cohorts using model discrimination and calibration methods. A total of 9104 patients were included, with 5676 in the Inception Cohort, 2461 in the Delta cohort, and 967 in the Omicron cohort. The Delta Cohort was younger with fewer comorbidities, while Omicron patients had lower rates of intensive care compared with the other cohorts. AKI occurred in 13.7% of the Inception Cohort, compared with 13.8% of Delta and 14.4% of Omicron (Omnibus p = 0.84). Compared with the Inception Cohort (area under the curve (AUC): 0.78, 95% confidence interval (CI): 0.76-0.80), the model showed stable discrimination in the Delta (AUC: 0.78, 95% CI: 0.75-0.80, p = 0.89) and Omicron (AUC: 0.74, 95% CI: 0.70-0.79, p = 0.37) cohorts. Estimated calibration index values were 0.02 (95% CI: 0.01-0.07) for Inception, 0.08 (95% CI: 0.05-0.17) for Delta, and 0.12 (95% CI: 0.04-0.47) for Omicron cohorts, p = 0.10 for both Delta and Omicron vs Inception. Our model for predicting hospital-acquired AKI remained accurate in different COVID-19 variants, suggesting that risk factors for AKI have not substantially evolved across variants.
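
A hedged sketch of how a frozen model could be re-evaluated on a later variant-era cohort: discrimination by AUC and calibration by a simple reliability (calibration) curve, used here as a stand-in for the estimated calibration index reported above. The model file, cohort file, and column names are hypothetical.

    import joblib
    import pandas as pd
    from sklearn.metrics import roc_auc_score
    from sklearn.calibration import calibration_curve

    model = joblib.load("aki_model_inception.joblib")   # model fit on the Inception Cohort
    cohort = pd.read_csv("omicron_cohort.csv")          # later-era admissions
    X, y = cohort.drop(columns=["aki"]), cohort["aki"]

    p = model.predict_proba(X)[:, 1]
    print("AUC:", round(roc_auc_score(y, p), 3))

    obs, pred = calibration_curve(y, p, n_bins=10, strategy="quantile")
    for o, e in zip(obs, pred):
        print(f"mean predicted {e:.2f} -> observed {o:.2f}")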


Subject(s)
Acute Kidney Injury, COVID-19, Humans, SARS-CoV-2, Acute Kidney Injury/epidemiology, Hospitals
3.
Appl Clin Inform ; 13(5): 1123-1130, 2022 10.
Article in English | MEDLINE | ID: mdl-36167337

ABSTRACT

OBJECTIVES: We characterized real-time patient portal test result viewing among emergency department (ED) patients and described patient characteristics overall and among those not enrolled in the portal at ED arrival. METHODS: Our observational study at an academic ED used portal log data to trend the proportion of adult patients who viewed results during their visit from May 04, 2021 to April 04, 2022. Correlation was assessed visually and with Kendall's τ. Covariate analysis using binary logistic regression assessed result(s) viewed as a function of time, accounting for age, sex, ethnicity, race, language, insurance status, disposition, and social vulnerability index (SVI). A second model only included patients not enrolled in the portal at arrival. We used random forest imputation to account for missingness and Huber-White heteroskedasticity-robust standard errors for patients with multiple encounters (α = 0.05). RESULTS: There were 60,314 ED encounters (31,164 unique patients). In 7,377 (12.2%) encounters, patients viewed results while still in the ED. Patients were not enrolled for portal use at arrival in 21,158 (35.2%) encounters, and 927 (4.4% of not enrolled, 1.5% overall) subsequently enrolled and viewed results in the ED. Visual inspection suggests that the proportion of patients who viewed results increased from roughly 5% to 15% over the study period (Kendall's τ = 0.61 [p <0.0001]). Overall and not-enrolled models yielded concordance indices (C) of 0.68 and 0.72, respectively, with significant overall likelihood ratio χ2 (p <0.0001). Time was independently associated with viewing results in both models after adjustment. Models revealed disparate use across age, race, ethnicity, SVI, sex, insurance status, and disposition groups. CONCLUSION: We observed increased portal-based test result viewing among ED patients over the year since the 21st Century Cures Act went into effect, even among those not enrolled at arrival. We observed disparities in those who viewed results.
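
A sketch of the two analyses named above, assuming one row per ED encounter with hypothetical columns week (integer week number), viewed_results, patient_id, and a few covariates: a Kendall's τ trend test on weekly viewing proportions (scipy) and a logistic model with Huber-White cluster-robust standard errors by patient (statsmodels).

    import pandas as pd
    import statsmodels.api as sm
    from scipy.stats import kendalltau

    enc = pd.read_csv("ed_encounters.csv")           # hypothetical: one row per encounter

    weekly = enc.groupby("week")["viewed_results"].mean()
    tau, p = kendalltau(weekly.index, weekly.values)
    print(f"Kendall's tau = {tau:.2f}, p = {p:.4f}")

    X = sm.add_constant(enc[["week", "age", "female", "svi"]])   # simplified covariate set
    fit = sm.Logit(enc["viewed_results"], X).fit(
        cov_type="cluster", cov_kwds={"groups": enc["patient_id"]})
    print(fit.summary())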


Subject(s)
Patient Portals, Adult, Humans, Hospital Emergency Service, Logistic Models, Retrospective Studies
4.
J Am Heart Assoc ; 11(11): e024094, 2022 06 07.
Article in English | MEDLINE | ID: mdl-35656988

ABSTRACT

Background The WATCH-DM (weight [body mass index], age, hypertension, creatinine, high-density lipoprotein cholesterol, diabetes control [fasting plasma glucose], ECG QRS duration, myocardial infarction, and coronary artery bypass grafting) and TRS-HFDM (Thrombolysis in Myocardial Infarction [TIMI] risk score for heart failure in diabetes) risk scores were developed to predict risk of heart failure (HF) among individuals with type 2 diabetes. WATCH-DM was developed to predict incident HF, whereas TRS-HFDM predicts HF hospitalization among patients with and without a prior HF history. We evaluated the model performance of both scores to predict incident HF events among patients with type 2 diabetes and no history of HF hospitalization across different cohorts and clinical settings with varying baseline risk. Methods and Results Incident HF risk was estimated by the integer-based WATCH-DM and TRS-HFDM scores in participants with type 2 diabetes free of baseline HF from 2 randomized clinical trials (TECOS [Trial Evaluating Cardiovascular Outcomes With Sitagliptin], N=12,028; and Look AHEAD [Action for Health in Diabetes] trial, N=4,867). The integer-based WATCH-DM score was also validated in electronic health record data from a single large health care system (N=7,475). Model discrimination was assessed by the Harrell concordance index and calibration by the Greenwood-Nam-D'Agostino statistic. HF incidence rate was 7.5, 3.9, and 4.1 per 1000 person-years in the TECOS, Look AHEAD trial, and electronic health record cohorts, respectively. Integer-based WATCH-DM and TRS-HFDM scores had similar discrimination and calibration for predicting 5-year HF risk in the Look AHEAD trial cohort (concordance indexes=0.70; Greenwood-Nam-D'Agostino P>0.30 for both). Both scores had lower discrimination and underpredicted HF risk in the TECOS cohort (concordance indexes=0.65 and 0.66, respectively; Greenwood-Nam-D'Agostino P<0.001 for both). In the electronic health record cohort, the integer-based WATCH-DM score demonstrated a concordance index of 0.73 with adequate calibration (Greenwood-Nam-D'Agostino P=0.96). The TRS-HFDM score could not be validated in the electronic health record cohort because urine albumin-to-creatinine ratio data were unavailable for most patients in contemporary clinical practice. Conclusions The WATCH-DM and TRS-HFDM risk scores can discriminate risk of HF among intermediate-risk populations with type 2 diabetes.
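
A sketch of one piece of the validation above: scoring discrimination of an integer risk score against time-to-incident-HF data with Harrell's concordance index via lifelines. The WATCH-DM point assignments are not reproduced here; the score, follow-up time, and event columns are assumed to exist under hypothetical names.

    import pandas as pd
    from lifelines.utils import concordance_index

    # hypothetical extract: watch_dm_score, years_to_hf_or_censor, incident_hf
    ehr = pd.read_csv("t2dm_cohort.csv")

    c = concordance_index(
        ehr["years_to_hf_or_censor"],        # follow-up time
        -ehr["watch_dm_score"],              # negate: higher score = higher risk = earlier events
        ehr["incident_hf"])                  # 1 = incident HF, 0 = censored
    print(f"Harrell's C = {c:.2f}")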


Subject(s)
Type 2 Diabetes Mellitus, Heart Failure, Myocardial Infarction, Adult, Creatinine, Type 2 Diabetes Mellitus/complications, Type 2 Diabetes Mellitus/diagnosis, Type 2 Diabetes Mellitus/epidemiology, Heart Failure/complications, Heart Failure/diagnosis, Heart Failure/epidemiology, Hospitalization, Humans, Myocardial Infarction/epidemiology, Risk Assessment/methods, Risk Factors
5.
Kidney Med ; 4(6): 100463, 2022 Jun.
Article in English | MEDLINE | ID: mdl-35434597

ABSTRACT

Rationale & Objective: Acute kidney injury (AKI) is common in patients hospitalized with COVID-19, but validated, predictive models for AKI are lacking. We aimed to develop the best predictive model for AKI in hospitalized patients with coronavirus disease 2019 and assess its performance over time with the emergence of vaccines and the Delta variant. Study Design: Longitudinal cohort study. Setting & Participants: Hospitalized patients with a positive severe acute respiratory syndrome coronavirus 2 polymerase chain reaction result between March 1, 2020, and August 20, 2021 at 19 hospitals in Texas. Exposures: Comorbid conditions, baseline laboratory data, inflammatory biomarkers. Outcomes: AKI defined by KDIGO (Kidney Disease: Improving Global Outcomes) creatinine criteria. Analytical Approach: Three nested models for AKI were built in a development cohort and validated in 2 out-of-time cohorts. Model discrimination and calibration measures were compared among cohorts to assess performance over time. Results: Of 10,034 patients, 5,676, 2,917, and 1,441 were in the development, validation 1, and validation 2 cohorts, respectively, of whom 776 (13.7%), 368 (12.6%), and 179 (12.4%) developed AKI, respectively (P = 0.26). Patients in the validation cohort 2 had fewer comorbid conditions and were younger than those in the development cohort or validation cohort 1 (mean age, 54 ± 16.8 years vs 61.4 ± 17.5 and 61.7 ± 17.3 years, respectively, P < 0.001). The validation cohort 2 had higher median high-sensitivity C-reactive protein level (81.7 mg/L) versus the development cohort (74.5 mg/L; P < 0.01) and higher median ferritin level (696 ng/mL) versus both the development cohort (444 ng/mL) and validation cohort 1 (496 ng/mL; P < 0.001). The final model, which added high-sensitivity C-reactive protein, ferritin, and D-dimer levels, had an area under the curve of 0.781 (95% CI, 0.763-0.799). Compared with the development cohort, discrimination by area under the curve (validation 1: 0.785 [0.760-0.810], P = 0.79, and validation 2: 0.754 [0.716-0.795], P = 0.53) and calibration by estimated calibration index (validation 1: 0.116 [0.041-0.281], P = 0.11, and validation 2: 0.081 [0.045-0.295], P = 0.11) showed stable performance over time. Limitations: Potential billing and coding bias. Conclusions: We developed and externally validated a model to accurately predict AKI in patients with coronavirus disease 2019. The performance of the model withstood changes in practice patterns and virus variants.

6.
BMC Nephrol ; 23(1): 50, 2022 02 01.
Article in English | MEDLINE | ID: mdl-35105331

ABSTRACT

BACKGROUND: Acute kidney injury (AKI) is a common complication in patients hospitalized with COVID-19 and may require renal replacement therapy (RRT). Dipstick urinalysis is frequently obtained, but data regarding the prognostic value of hematuria and proteinuria for kidney outcomes are scarce. METHODS: Patients with a positive severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) PCR result who had a urinalysis obtained on admission to one of 20 hospitals were included. Nested models with degree of hematuria and proteinuria were used to predict AKI and RRT during admission. Presence of chronic kidney disease (CKD) and baseline serum creatinine were added to test for improvement in model fit. RESULTS: Of 5,980 individuals, 829 (13.9%) developed AKI during admission, and 149 (18.0%) of those with AKI received RRT. Proteinuria and hematuria degrees significantly increased with AKI severity (P < 0.001 for both). Any degree of proteinuria and hematuria was associated with an increased risk of AKI and RRT. In predictive models for AKI, presence of CKD improved the area under the curve (AUC) (95% confidence interval) to 0.73 (0.71, 0.75), P < 0.001, and adding baseline creatinine improved the AUC to 0.85 (0.83, 0.86), P < 0.001, when compared to the base model AUC using only proteinuria and hematuria, AUC = 0.64 (0.62, 0.67). In RRT models, CKD status improved the AUC to 0.78 (0.75, 0.82), P < 0.001, and baseline creatinine improved the AUC to 0.84 (0.80, 0.88), P < 0.001, compared to the base model, AUC = 0.72 (0.68, 0.76). There was no significant improvement in model discrimination when both CKD and baseline serum creatinine were included. CONCLUSIONS: Proteinuria and hematuria values on dipstick urinalysis can be utilized to predict AKI and RRT in hospitalized patients with COVID-19. We derived formulas using these two readily available values to help prognosticate kidney outcomes in these patients. Furthermore, the incorporation of CKD or baseline creatinine increases the accuracy of these formulas.
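
A minimal sketch of the nested-model comparison described above, with hypothetical file and column names; the study's actual formulas and confidence intervals are not reproduced, and in-sample AUC is shown for brevity.

    import pandas as pd
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    d = pd.read_csv("covid_admissions.csv")  # hypothetical extract
    y = d["aki"]

    models = [
        ("base: proteinuria + hematuria", ["proteinuria_grade", "hematuria_grade"]),
        ("base + CKD",                    ["proteinuria_grade", "hematuria_grade", "ckd"]),
        ("base + baseline creatinine",    ["proteinuria_grade", "hematuria_grade", "baseline_creatinine"]),
    ]
    for label, cols in models:
        m = LogisticRegression(max_iter=1000).fit(d[cols], y)
        print(f"{label}: AUC = {roc_auc_score(y, m.predict_proba(d[cols])[:, 1]):.2f}")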


Subject(s)
Acute Kidney Injury/etiology, COVID-19/complications, Hematuria/diagnosis, Proteinuria/diagnosis, Urinalysis/methods, Acute Kidney Injury/ethnology, Acute Kidney Injury/therapy, Aged, Area Under Curve, COVID-19/ethnology, Confidence Intervals, Creatinine/blood, Female, Hospitalization, Humans, Longitudinal Studies, Male, Middle Aged, Predictive Value of Tests, Chronic Renal Insufficiency/diagnosis, Renal Replacement Therapy/statistics & numerical data
7.
Appl Clin Inform ; 12(5): 1074-1081, 2021 10.
Article in English | MEDLINE | ID: mdl-34788889

ABSTRACT

BACKGROUND: Novel coronavirus disease 2019 (COVID-19) vaccine administration has faced distribution barriers across the United States. We sought to describe our vaccine delivery experience in the first week of vaccine availability and our effort to prioritize employees by risk, with the goal of an infrastructure that optimized the speed and efficiency of vaccine delivery while minimizing the risk of infection during the immunization process. OBJECTIVE: This article aims to evaluate an employee prioritization/invitation/scheduling system, leveraging an integrated electronic health record patient portal framework for employee COVID-19 immunizations at an academic medical center. METHODS: We conducted an observational cross-sectional study during January 2021 at a single urban academic center. All employees who met COVID-19 vaccine allocation criteria for phases 1a.1 to 1a.4 were included. We implemented a prioritization/invitation/scheduling framework and evaluated time from invitation to scheduling as a proxy for vaccine interest and time from arrival to vaccine administration to measure operational throughput. RESULTS: We allotted vaccines for 13,753 employees, but only 10,662 employees with an active patient portal account received an invitation. Of those with an active account, 6,483 (61%) scheduled an appointment and 6,251 (59%) were immunized in the first 7 days. About 66% of invited providers were vaccinated in the first 7 days. In contrast, only 41% of invited facility/food service employees received the first dose of the vaccine in the first 7 days (p < 0.001). At the vaccination site, employees waited 5.6 minutes (interquartile range [IQR]: 3.9-8.3) from arrival to vaccination. CONCLUSION: We developed a system of early COVID-19 vaccine prioritization and administration in our health care system. We saw strong early acceptance among those with proximal exposure to COVID-19 but noted a significant difference in the willingness of different employee groups to receive the vaccine.


Subject(s)
COVID-19, Mass Vaccination, Academic Medical Centers, COVID-19 Vaccines, Cross-Sectional Studies, Humans, SARS-CoV-2, United States
8.
JMIR Cardio ; 5(1): e22296, 2021 May 12.
Article in English | MEDLINE | ID: mdl-33797396

ABSTRACT

BACKGROUND: Professional society guidelines are emerging for cardiovascular care in cancer patients. However, it is not yet clear how effectively the cancer survivor population is screened and treated for cardiomyopathy in contemporary clinical practice. As electronic health records (EHRs) are now widely used in clinical practice, we tested the hypothesis that an EHR-based cardio-oncology registry can address these questions. OBJECTIVE: The aim of this study was to develop an EHR-based pragmatic cardio-oncology registry and, as proof of principle, to investigate care gaps in the cardiovascular care of cancer patients. METHODS: We generated a programmatically deidentified, real-time EHR-based cardio-oncology registry from all patients in our institutional Cancer Population Registry (N=8275, 2011-2017). We investigated: (1) left ventricular ejection fraction (LVEF) assessment before and after treatment with potentially cardiotoxic agents; and (2) guideline-directed medical therapy (GDMT) for left ventricular dysfunction (LVD), defined as LVEF<50%, and symptomatic heart failure with reduced LVEF (HFrEF), defined as LVEF<50% and Problem List documentation of systolic congestive heart failure or dilated cardiomyopathy. RESULTS: Rapid development of an EHR-based cardio-oncology registry was feasible. Identification of tests and outcomes was similar using the EHR-based cardio-oncology registry and manual chart abstraction (100% sensitivity and 83% specificity for LVD). LVEF was documented prior to initiation of cancer therapy in 19.8% of patients. Prevalence of postchemotherapy LVD and HFrEF was relatively low (9.4% and 2.5%, respectively). Among patients with postchemotherapy LVD or HFrEF, those referred to cardiology had a significantly higher prescription rate of a GDMT. CONCLUSIONS: EHR data can efficiently populate a real-time, pragmatic cardio-oncology registry as a byproduct of clinical care for health care delivery investigations.

9.
Appl Clin Inform ; 12(1): 182-189, 2021 01.
Article in English | MEDLINE | ID: mdl-33694144

ABSTRACT

OBJECTIVE: Clinical decision support (CDS) can contribute to quality and safety. Prior work has shown that errors in CDS systems are common and can lead to unintended consequences. Many CDS systems use Boolean logic, which can be difficult for CDS analysts to specify accurately. We set out to determine the prevalence of certain types of Boolean logic errors in CDS statements. METHODS: Nine health care organizations extracted Boolean logic statements from their Epic electronic health record (EHR). We developed an open-source software tool, which implemented the Espresso logic minimization algorithm, to identify three classes of logic errors. RESULTS: Participating organizations submitted 260,698 logic statements, of which 44,890 were minimized by Espresso. We found errors in 209 of them. Every participating organization had at least two errors, and all organizations reported that they would act on the feedback. DISCUSSION: An automated algorithm can readily detect specific categories of Boolean CDS logic errors. These errors represent a minority of CDS errors, but very likely require correction to avoid patient safety issues. This process found only a few errors at each site, but the problem appears to be widespread, affecting all participating organizations. CONCLUSION: Both CDS implementers and EHR vendors should consider implementing similar algorithms as part of the CDS authoring process to reduce the number of errors in their CDS interventions.
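
A small illustration of the idea, using sympy's logic minimizer as a stand-in for the Espresso algorithm used in the study: if a Boolean CDS statement minimizes to a form that no longer references one of its variables, the original statement likely contains a redundant or contradictory clause worth reviewing. The variable names are invented for illustration.

    from sympy import symbols
    from sympy.logic.boolalg import Or, And, Not, simplify_logic

    on_warfarin, inr_high = symbols("on_warfarin inr_high")

    # (on_warfarin AND inr_high) OR (on_warfarin AND NOT inr_high): inr_high is redundant
    stmt = Or(And(on_warfarin, inr_high), And(on_warfarin, Not(inr_high)))
    minimized = simplify_logic(stmt, form="dnf")

    if minimized.free_symbols != stmt.free_symbols:
        print(f"Possible logic error: {stmt} minimizes to {minimized}")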


Subject(s)
Clinical Decision Support Systems, Logic, Electronic Health Records, Humans, Software
10.
J Am Med Inform Assoc ; 28(5): 899-906, 2021 04 23.
Article in English | MEDLINE | ID: mdl-33566093

ABSTRACT

OBJECTIVE: The electronic health record (EHR) data deluge makes data retrieval more difficult, escalating cognitive load and exacerbating clinician burnout. New auto-summarization techniques are needed. The study goal was to determine if problem-oriented view (POV) auto-summaries improve data retrieval workflows. We hypothesized that POV users would perform tasks faster, make fewer errors, be more satisfied with EHR use, and experience less cognitive load as compared with users of the standard view (SV). METHODS: Simple data retrieval tasks were performed in an EHR simulation environment. A randomized block design was used. In the control group (SV), subjects retrieved lab results and medications by navigating to corresponding sections of the electronic record. In the intervention group (POV), subjects clicked on the name of the problem and immediately saw lab results and medications relevant to that problem. RESULTS: With POV, mean completion time was faster (173 seconds for POV vs 205 seconds for SV; P < .0001), the error rate was lower (3.4% for POV vs 7.7% for SV; P = .0010), user satisfaction was greater (System Usability Scale score 58.5 for POV vs 41.3 for SV; P < .0001), and cognitive task load was less (NASA Task Load Index score 0.72 for POV vs 0.99 for SV; P < .0001). DISCUSSION: The study demonstrates that using a problem-based auto-summary has a positive impact on 4 aspects of EHR data retrieval, including cognitive load. CONCLUSION: EHRs have brought on a data deluge, with increased cognitive load and physician burnout. To mitigate these increases, further development and implementation of auto-summarization functionality and the requisite knowledge base are needed.
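
A sketch of the kind of two-group comparison reported above (completion time by t-test, error rate by chi-square) for the problem-oriented view (POV) versus standard view (SV); the data file and columns are hypothetical, and the study's randomized block structure is not modeled.

    import pandas as pd
    from scipy.stats import ttest_ind, chi2_contingency

    tasks = pd.read_csv("simulation_tasks.csv")      # hypothetical: group in {"POV", "SV"}
    pov = tasks[tasks["group"] == "POV"]
    sv = tasks[tasks["group"] == "SV"]

    t, p_time = ttest_ind(pov["seconds"], sv["seconds"])
    print(f"Completion time: {pov['seconds'].mean():.0f}s (POV) vs {sv['seconds'].mean():.0f}s (SV), p = {p_time:.4f}")

    chi2, p_err, _, _ = chi2_contingency(pd.crosstab(tasks["group"], tasks["error"]))
    print(f"Error rate comparison: chi2 = {chi2:.1f}, p = {p_err:.4f}")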


Subject(s)
Data Display, Electronic Health Records, Problem-Oriented Medical Records, Humans, Information Storage and Retrieval, User-Computer Interface, Workflow
11.
J Infect ; 82(1): 41-47, 2021 01.
Article in English | MEDLINE | ID: mdl-33038385

ABSTRACT

BACKGROUND: We created an electronic health record-based registry using automated data extraction tools to study the epidemiology of bloodstream infections (BSI) in solid organ transplant recipients. The overarching goal was to determine the usefulness of an electronic health record-based registry using data extraction tools for clinical research in solid organ transplantation. METHODS: We performed a retrospective single-center cohort study of adult solid organ transplant recipients from 2010 to 2015. Extraction tools were used to retrieve data from the electronic health record, which was integrated with national data sources. Electronic health records of subjects with positive blood cultures were manually adjudicated using consensus definitions. One-year cumulative incidence, risk factors for BSI acquisition, and 1-year mortality were analyzed by the Kaplan-Meier method and Cox modeling, and 30-day mortality with logistic regression. RESULTS: In 917 solid organ transplant recipients, the cumulative incidence of BSI was 8.4% (95% confidence interval 6.8-10.4) with central line-associated BSI as the most common source. The proportion of multidrug-resistant isolates increased from 0% in 2010 to 47% in 2015 (p = 0.03). BSI was the strongest risk factor for 1-year mortality (HR=8.44; 4.99-14.27; p<0.001). In 11 of 14 deaths, BSI was the main cause of death or a contributing factor in patients with non-rapidly fatal underlying conditions. CONCLUSIONS: Our study illustrates the usefulness of an electronic health record-based registry using automated extraction tools for clinical research in the field of solid organ transplantation. BSI reduces the 1-year survival of solid organ transplant recipients. The most common sources of BSI in our study were preventable.
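
A sketch of the time-to-event analyses named above using lifelines, with hypothetical column names; BSI is shown as a baseline covariate in the Cox model for brevity, whereas a time-varying formulation would be more faithful to the study.

    import pandas as pd
    from lifelines import KaplanMeierFitter, CoxPHFitter

    sot = pd.read_csv("sot_recipients.csv")          # hypothetical extract

    kmf = KaplanMeierFitter()
    kmf.fit(sot["days_to_bsi_or_censor"], event_observed=sot["bsi"])
    cum_inc_1y = 1 - kmf.survival_function_at_times(365).iloc[0]
    print(f"1-year cumulative incidence of BSI: {cum_inc_1y:.1%}")

    cph = CoxPHFitter()
    cph.fit(sot[["days_to_death_or_censor", "died_1y", "bsi", "age"]],
            duration_col="days_to_death_or_censor", event_col="died_1y")
    cph.print_summary()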


Subject(s)
Bacteremia, Organ Transplantation, Sepsis, Adult, Bacteremia/epidemiology, Cohort Studies, Humans, Organ Transplantation/adverse effects, Proof of Concept Study, Registries, Retrospective Studies, Risk Factors, Sepsis/epidemiology
12.
J Am Med Inform Assoc ; 26(11): 1344-1354, 2019 11 01.
Article in English | MEDLINE | ID: mdl-31512730

ABSTRACT

OBJECTIVE: We sought to demonstrate applicability of user stories, progressively elaborated by testable acceptance criteria, as lightweight requirements for agile development of clinical decision support (CDS). MATERIALS AND METHODS: User stories employed the template: As a [type of user], I want [some goal] so that [some reason]. From the "so that" section, CDS benefit measures were derived. Detailed acceptance criteria were elaborated through ensuing conversations. We estimated user story size with "story points," and depicted multiple user stories with a use case diagram or feature breakdown structure. Large user stories were split to fit into 2-week iterations. RESULTS: One example user story was: As a rheumatologist, I want to be advised if my patient with rheumatoid arthritis is not on a disease-modifying anti-rheumatic drug (DMARD), so that they receive optimal therapy and can experience symptom improvement. This yielded a process measure (DMARD use), and an outcome measure (Clinical Disease Activity Index). Following implementation, the DMARD nonuse rate decreased from 3.7% to 1.4%. Patients with a high Clinical Disease Activity Index improved from 13.7% to 7%. For a thromboembolism prevention CDS project, diagrams organized multiple user stories. DISCUSSION: User stories written in the clinician's voice aid CDS governance and lead naturally to measures of CDS effectiveness. Estimation of relative story size helps plan CDS delivery dates. User stories prove to be practical even on larger projects. CONCLUSIONS: User stories concisely communicate the who, what, and why of a CDS request, and serve as lightweight requirements for agile development to meet the demand for increasingly diverse CDS.
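
Purely illustrative: the user-story template quoted above, captured as a small data structure with acceptance criteria and a story-point estimate attached; the field names are my own, not an artifact of the paper.

    from dataclasses import dataclass, field

    @dataclass
    class UserStory:
        role: str                          # "As a [type of user]"
        goal: str                          # "I want [some goal]"
        reason: str                        # "so that [some reason]" -> source of benefit measures
        acceptance_criteria: list = field(default_factory=list)
        story_points: int = 0

    story = UserStory(
        role="rheumatologist",
        goal="to be advised if my patient with rheumatoid arthritis is not on a DMARD",
        reason="they receive optimal therapy and can experience symptom improvement",
        acceptance_criteria=["Advisory appears only for RA patients with no active DMARD order"],
        story_points=3,
    )
    print(f"As a {story.role}, I want {story.goal} so that {story.reason}.")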


Subject(s)
Data Collection, Clinical Decision Support Systems, Narration, Electronic Health Records, Humans
13.
Stud Health Technol Inform ; 264: 1915-1916, 2019 Aug 21.
Article in English | MEDLINE | ID: mdl-31438405

ABSTRACT

De-implementation of a 10-year-old EHR configuration resulted in a greater than 50% decrease in the volume of the most common InBasket message type received by PCPs. Proactively seeking out ways not only to (a) implement helpful new EHR features but also to (b) de-implement detrimental ones offers an opportunity to accelerate improvement in the signal-to-noise ratio and reduce clinician frustration and dissatisfaction with the EHR. Balancing governance decision agendas with de-implementation opportunities can enhance the clinician experience.


Subject(s)
Electronic Health Records, Signal-to-Noise Ratio
15.
J Am Med Inform Assoc ; 26(8-9): 703-713, 2019 08 01.
Article in English | MEDLINE | ID: mdl-31081898

ABSTRACT

OBJECTIVE: Determine whether women and men differ in volunteering to join a Research Recruitment Registry when invited to participate via an electronic patient portal without human bias. MATERIALS AND METHODS: Under-representation of women and other demographic groups in clinical research studies could be due either to invitation bias (explicit or implicit) during screening and recruitment or to lower rates of deciding to participate when offered. By making an invitation to participate in a Research Recruitment Registry available to all patients accessing our patient portal, regardless of demographics, we sought to remove implicit bias in offering participation and thus independently assess agreement rates. RESULTS: Women were represented in the Research Recruitment Registry slightly more than their proportion of all portal users (n = 194,775). Controlling for age, race, ethnicity, portal use, chronic disease burden, and other questionnaire use, women were statistically more likely to agree to join the Registry than men (odds ratio 1.17, 95% CI, 1.12-1.21). In contrast, Black males, Hispanics (both sexes), and particularly Asians (both sexes) had low participation-to-population ratios; this under-representation persisted in the multivariable regression model. DISCUSSION: This supports the view that historical under-representation of women in clinical studies is likely due, at least in part, to implicit bias in offering participation. Distinguishing the mechanism for under-representation could help in designing strategies to improve study representation, leading to more effective evidence-based recommendations. CONCLUSION: Patient portals offer an attractive option for minimizing bias and encouraging broader, more representative participation in clinical research.


Subject(s)
Patient Portals, Patient Selection, Prejudice, Adult, Aged, Cross-Sectional Studies, Female, Health Equity, Healthcare Disparities, Humans, Logistic Models, Male, Middle Aged, Registries, Sexism, Young Adult
16.
JMIR Med Inform ; 7(1): e11487, 2019 Jan 16.
Article in English | MEDLINE | ID: mdl-30664458

ABSTRACT

BACKGROUND: Defining clinical phenotypes from electronic health record (EHR)-derived data proves crucial for clinical decision support, population health endeavors, and translational research. EHR diagnoses now commonly draw from a fine-grained clinical terminology, either native SNOMED CT or a vendor-supplied terminology mapped to SNOMED CT concepts, the standard for EHR interoperability. Accordingly, electronic clinical quality measures (eCQMs) increasingly define clinical phenotypes with SNOMED CT value sets. The work of creating and maintaining list-based value sets proves daunting, as does ensuring that their contents accurately represent the clinically intended condition. OBJECTIVE: The goal of the research was to compare an intensional (concept hierarchy-based) versus extensional (list-based) value set approach to defining clinical phenotypes using SNOMED CT-encoded data from EHRs by evaluating value set conciseness, time to create, and completeness. METHODS: Starting from published Centers for Medicare and Medicaid Services (CMS) high-priority eCQMs, we selected 10 clinical conditions referenced by those eCQMs. For each, the published SNOMED CT list-based (extensional) value set was downloaded from the Value Set Authority Center (VSAC). Ten corresponding SNOMED CT hierarchy-based intensional value sets for the same conditions were identified within our EHR. From each hierarchy-based intensional value set, an exactly equivalent full extensional value set was derived, enumerating all included descendant SNOMED CT concepts. Comparisons were then made between (1) VSAC-downloaded list-based (extensional) value sets, (2) corresponding hierarchy-based intensional value sets for the same conditions, and (3) derived list-based (extensional) value sets exactly equivalent to the hierarchy-based intensional value sets. Value set conciseness was assessed by the number of SNOMED CT concepts needed for definition. Time to construct the value sets for local use was measured. Value set completeness was assessed by comparing contents of the downloaded extensional versus intensional value sets. Two measures of content completeness were made: for individual SNOMED CT concepts and for the mapped diagnosis clinical terms available for selection within the EHR by clinicians. RESULTS: The 10 hierarchy-based intensional value sets proved far simpler and faster to construct than exactly equivalent derived extensional value set lists, requiring a median of 3 versus 78 concepts to define and 5 versus 37 minutes to build. The hierarchy-based intensional value sets also proved more complete: in comparison, the 10 downloaded 2018 extensional value sets contained a median of just 35% of the intensional value sets' SNOMED CT concepts and 65% of mapped EHR clinical terms. CONCLUSIONS: In the EHR era, defining conditions preferentially should employ SNOMED CT concept hierarchy-based (intensional) value sets rather than extensional lists. By doing so, clinical guideline and eCQM authors can more readily engage specialists in vetting condition subtypes to include and exclude, and streamline broad EHR implementation of condition-specific decision support promoting guideline adherence for patient benefit.
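
A toy illustration of the intensional-versus-extensional distinction discussed above: an intensional definition names one or a few ancestor concepts, and the equivalent extensional list is produced by walking the is-a hierarchy. The concept IDs and relationships below are invented placeholders, not a real SNOMED CT release.

    from collections import defaultdict

    # (child, parent) is-a pairs; placeholder IDs only
    relationships = [("1002", "1001"), ("1003", "1001"), ("1004", "1003")]

    children = defaultdict(set)
    for child, parent in relationships:
        children[parent].add(child)

    def expand(intensional_roots):
        """Return the extensional value set: the roots plus all of their descendants."""
        members, stack = set(intensional_roots), list(intensional_roots)
        while stack:
            for c in children[stack.pop()]:
                if c not in members:
                    members.add(c)
                    stack.append(c)
        return members

    print(expand({"1001"}))   # a one-concept intensional definition expands to the full list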

17.
Appl Clin Inform ; 9(3): 667-682, 2018 07.
Article in English | MEDLINE | ID: mdl-30157499

ABSTRACT

BACKGROUND: Defining clinical conditions from electronic health record (EHR) data underpins population health activities, clinical decision support, and analytics. In an EHR, defining a condition commonly employs a diagnosis value set or "grouper." For constructing value sets, Systematized Nomenclature of Medicine-Clinical Terms (SNOMED CT) offers high clinical fidelity, a hierarchical ontology, and wide implementation in EHRs as the standard interoperability vocabulary for problems. OBJECTIVE: This article demonstrates a practical approach to defining conditions with combinations of SNOMED CT concept hierarchies, and evaluates sharing of definitions for clinical and analytic uses. METHODS: We constructed diagnosis value sets for EHR patient registries using SNOMED CT concept hierarchies combined with Boolean logic, and shared them for clinical decision support, reporting, and analytic purposes. RESULTS: A total of 125 condition-defining "standard" SNOMED CT diagnosis value sets were created within our EHR. The median number of SNOMED CT concept hierarchies needed was only 2 (25th-75th percentiles: 1-5). Each value set, when compiled as an EHR diagnosis grouper, was associated with a median of 22 International Classification of Diseases (ICD)-9 and ICD-10 codes (25th-75th percentiles: 8-85) and yielded a median of 155 clinical terms available for selection by clinicians in the EHR (25th-75th percentiles: 63-976). Sharing of standard groupers for population health, clinical decision support, and analytic uses was high, including 57 patient registries (with 362 uses of standard groupers), 132 clinical decision support records, 190 rules, 124 EHR reports, 125 diagnosis dimension slicers for self-service analytics, and 111 clinical quality measure calculations. Identical SNOMED CT definitions were created in an EHR-agnostic tool enabling application across disparate organizations and EHRs. CONCLUSION: SNOMED CT-based diagnosis value sets are simple to develop, concise, understandable to clinicians, useful in the EHR and for analytics, and shareable. Developing curated SNOMED CT hierarchy-based condition definitions for public use could accelerate cross-organizational population health efforts, "smarter" EHR feature configuration, and clinical-translational research employing EHR-derived data.


Subject(s)
Electronic Health Records, Systematized Nomenclature of Medicine, Clinical Decision Support Systems, Humans, Software, Translational Biomedical Research
18.
JMIR Med Inform ; 6(2): e23, 2018 Apr 13.
Article in English | MEDLINE | ID: mdl-29653922

ABSTRACT

BACKGROUND: Moving to electronic health records (EHRs) confers substantial benefits but risks unintended consequences. Modern EHRs consist of complex software code with extensive local configurability options, which can introduce defects. Defects in clinical decision support (CDS) tools are surprisingly common. Feasible approaches to prevent and detect defects in EHR configuration, including CDS tools, are needed. In complex software systems, use of test-driven development and automated regression testing promotes reliability. Test-driven development encourages modular, testable design and expanding regression test coverage. Automated regression test suites improve software quality, providing a "safety net" for future software modifications. Each automated acceptance test serves multiple purposes, as requirements (prior to build), acceptance testing (on completion of build), regression testing (once live), and "living" design documentation. Rapid-cycle development or "agile" methods are being successfully applied to CDS development. The agile practice of automated test-driven development is not widely adopted, perhaps because most EHR software code is vendor-developed. However, key CDS advisory configuration design decisions and rules stored in the EHR may prove amenable to automated testing as "executable requirements." OBJECTIVE: We aimed to establish feasibility of acceptance test-driven development of clinical decision support advisories in a commonly used EHR, using an open source automated acceptance testing framework (FitNesse). METHODS: Acceptance tests were initially constructed as spreadsheet tables to facilitate clinical review. Each table specified one aspect of the CDS advisory's expected behavior. Table contents were then imported into a test suite in FitNesse, which queried the EHR database to automate testing. Tests and corresponding CDS configuration were migrated together from the development environment to production, with tests becoming part of the production regression test suite. RESULTS: We used test-driven development to construct a new CDS tool advising Emergency Department nurses to perform a swallowing assessment prior to administering oral medication to a patient with suspected stroke. Test tables specified desired behavior for (1) applicable clinical settings, (2) triggering action, (3) rule logic, (4) user interface, and (5) system actions in response to user input. Automated test suite results for the "executable requirements" are shown prior to building the CDS alert, during build, and after successful build. CONCLUSIONS: Automated acceptance test-driven development and continuous regression testing of CDS configuration in a commercial EHR proves feasible with open source software. Automated test-driven development offers one potential contribution to achieving high-reliability EHR configuration. Vetting acceptance tests with clinicians elicits their input on crucial configuration details early during initial CDS design and iteratively during rapid-cycle optimization.
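
An illustrative stand-in for the spreadsheet-style acceptance tables described above, expressed as a pytest parametrized decision table; the rule function and its inputs are hypothetical, not the paper's actual CDS configuration or FitNesse fixtures.

    import pytest

    def swallow_screen_advisory(suspected_stroke, oral_med_ordered, screen_documented):
        """Advise a swallowing assessment before oral medication in suspected stroke."""
        return suspected_stroke and oral_med_ordered and not screen_documented

    @pytest.mark.parametrize(
        "suspected_stroke, oral_med_ordered, screen_documented, expected",
        [
            (True,  True,  False, True),    # stroke + oral med, no screen -> advise
            (True,  True,  True,  False),   # screen already documented -> silent
            (True,  False, False, False),   # no oral medication ordered -> silent
            (False, True,  False, False),   # not a suspected stroke -> silent
        ],
    )
    def test_swallow_screen_advisory(suspected_stroke, oral_med_ordered, screen_documented, expected):
        assert swallow_screen_advisory(suspected_stroke, oral_med_ordered, screen_documented) == expected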

19.
Methods Inf Med ; 56(99): e74-e83, 2017 06 14.
Article in English | MEDLINE | ID: mdl-28930362

ABSTRACT

BACKGROUND: Creation of a new electronic health record (EHR)-based registry often can be a "one-off" complex endeavor: first developing new EHR data collection and clinical decision support tools, followed by developing registry-specific data extractions from the EHR for analysis. Each development phase typically has its own long development and testing time, leading to a prolonged overall cycle time for delivering one functioning registry with companion reporting into production. The next registry request then starts from scratch. Such an approach will not scale to meet the emerging demand for specialty registries to support population health and value-based care. OBJECTIVE: To determine if the creation of EHR-based specialty registries could be markedly accelerated by employing (a) a finite core set of EHR data collection principles and methods, (b) concurrent engineering of data extraction and data warehouse design using a common dimensional data model for all registries, and (c) agile development methods commonly employed in new product development. METHODS: We adopted as guiding principles to (a) capture data as a byproduct of care of the patient, (b) reinforce optimal EHR use by clinicians, (c) employ a finite but robust set of EHR data capture tool types, and (d) leverage our existing technology toolkit. Registries were defined by a shared condition (recorded on the Problem List) or a shared exposure to a procedure (recorded on the Surgical History) or to a medication (recorded on the Medication List). Any EHR fields needed - either to determine registry membership or to calculate a registry-associated clinical quality measure (CQM) - were included in the enterprise data warehouse (EDW) shared dimensional data model. Extract-transform-load (ETL) code was written to pull data at defined "grains" from the EHR into the EDW model. All calculated CQM values were stored in a single Fact table in the EDW crossing all registries. Registry-specific dashboards were created in the EHR to display both (a) real-time patient lists of registry patients and (b) EDW-generated CQM data. Agile project management methods were employed, including co-development, lightweight requirements documentation with User Stories and acceptance criteria, and time-boxed iterative development of EHR features in 2-week "sprints" for rapid-cycle feedback and refinement. RESULTS: Using this approach, in calendar year 2015 we developed a total of 43 specialty chronic disease registries, with 111 new EHR data collection and clinical decision support tools, 163 new clinical quality measures, and 30 clinic-specific dashboards reporting on both real-time patient care gaps and summarized and vetted CQM measure performance trends. CONCLUSIONS: This study suggests concurrent design of EHR data collection tools and reporting can quickly yield useful EHR structured data for chronic disease registries, and bodes well for efforts to migrate away from manual abstraction. This work also supports the view that in new EHR-based registry development, as in new product development, adopting agile principles and practices can help deliver valued, high-quality features early and often.
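
A sketch of the single cross-registry CQM fact-table idea described above, in pandas: registry membership is derived from the Problem List, a measure is calculated, and the result lands as rows with a common patient-registry-measure-period grain. File names, columns, and the concept code are illustrative assumptions.

    import pandas as pd

    problems = pd.read_csv("problem_list.csv")       # hypothetical: patient_id, snomed_concept
    meds = pd.read_csv("medication_list.csv")        # hypothetical: patient_id, med_class

    ra_registry = set(problems.loc[problems["snomed_concept"] == "69896004", "patient_id"])
    on_dmard = set(meds.loc[meds["med_class"] == "DMARD", "patient_id"])

    cqm_fact = pd.DataFrame(
        [{"patient_id": pid,
          "registry": "rheumatoid_arthritis",
          "measure": "dmard_use",
          "period": "2015",
          "numerator": int(pid in on_dmard)} for pid in sorted(ra_registry)])
    print(f"DMARD use rate: {cqm_fact['numerator'].mean():.1%}")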


Subject(s)
Electronic Health Records/standards, Registries/standards, Data Collection, Documentation, Humans, Software
20.
Health Innov Point Care Conf ; 2018: 56-59, 2017 Nov.
Article in English | MEDLINE | ID: mdl-30364762

ABSTRACT

Even the most innovative healthcare technologies provide patient benefits only when adopted by clinicians and/or patients in actual practice. Yet realizing optimal positive impact from a new technology for the widest range of individuals who would benefit remains elusive. In software and new product development, iterative rapid-cycle "agile" methods more rapidly provide value, mitigate failure risks, and adapt to customer feedback. Co-development between builders and customers is a key agile principle. But how does one accomplish co-development with busy clinicians? In this paper, we discuss four practical agile co-development practices found helpful clinically: (1) User stories for lightweight requirements; (2) Time-boxed development for collaborative design and prompt course correction; (3) Automated acceptance test driven development, with clinician-vetted specifications; and (4) Monitoring of clinician interactions after release, for rapid-cycle product adaptation and evolution. In the coming wave of innovation in healthcare apps ushered in by open APIs to EHRs, learning rapidly what new product features work well for clinicians and patients will become even more crucial.
